Long- and short-term recommendation model and updating method based on knowledge graph preference attention network
Junhua GU, Shuai FAN, Ningning LI, Suqi ZHANG
Journal of Computer Applications    2022, 42 (4): 1079-1086.   DOI: 10.11772/j.issn.1001-9081.2021071242
Abstract

Current research on knowledge graph recommendation mainly focuses on model construction and training. In practical applications, however, the model must be updated regularly through incremental updating so that it can adapt to the changing preferences of both new and old users. Most existing models use only the users' long-term interest representations for recommendation and ignore their short-term interests; the methods used to aggregate neighborhood entities into item vector representations lack interpretability; and catastrophic forgetting occurs during model updating. To address these problems, a Knowledge Graph Preference ATtention network based Long- and Short-term recommendation (KGPATLS) model and its updating method were proposed. Firstly, the KGPATLS model introduced an aggregation method based on a preference attention network and a user representation that combines users' long- and short-term interests. Then, to alleviate catastrophic forgetting during model updating, an incremental updating method Fusing Predict Sampling and Knowledge Distillation (FPSKD) was proposed. The proposed model and incremental updating method were evaluated on the MovieLens-1M and Last.FM datasets. Compared with the best baseline model, Knowledge Graph Convolutional Network (KGCN), KGPATLS improves the Area Under Curve (AUC) by 2.2% and 1.4% and the Accuracy (Acc) by 2.5% and 2.9% on the two datasets respectively. Compared with three baseline incremental updating methods on the two datasets, FPSKD outperforms Fine Tune and Random Sampling in both AUC and Acc, and reduces training time to about one eighth and one quarter of that of Full Batch respectively. Experimental results verify the performance of the KGPATLS model and show that FPSKD updates the model efficiently while maintaining its performance.
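The two modeling ideas in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the attention form, the fusion by addition, and the gating coefficient are all assumptions made for clarity. It shows (1) aggregating an item's knowledge-graph neighbor entities with attention weights derived from the user's preference vector, and (2) representing the user as a combination of long- and short-term interest vectors.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def aggregate_item(item_vec, neighbor_vecs, user_pref_vec):
    """Preference-attention aggregation of an item's KG neighbors (sketch).

    Each neighbor entity is scored by its affinity with the user's
    preference vector, so the attention weights are interpretable as
    "how much this user cares about this neighboring entity".
    """
    scores = neighbor_vecs @ user_pref_vec    # (n_neighbors,)
    weights = softmax(scores)                 # sums to 1
    neighborhood = weights @ neighbor_vecs    # weighted summary, (dim,)
    # Fuse the item's own vector with its attended neighborhood
    # (simple addition here; an assumption, not the paper's choice).
    return item_vec + neighborhood

def user_representation(long_term_vec, short_term_vec, gate=0.5):
    """Convex combination of long- and short-term interest vectors."""
    return gate * long_term_vec + (1 - gate) * short_term_vec
```

With a gate of 0.5 the two interest signals contribute equally; in practice such a coefficient would be learned per user or per session.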

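The incremental-updating idea behind FPSKD can also be sketched in code. In knowledge distillation for incremental updates, the frozen previous model acts as a "teacher" whose predictions on replayed old samples regularize the updated "student" model, so that fine-tuning on new interactions does not catastrophically erase old preferences. The loss weighting, temperature, and all names below are illustrative assumptions, not the authors' FPSKD implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def distillation_update_loss(student_logits_new, labels_new,
                             student_logits_replay, teacher_logits_replay,
                             alpha=0.5, temperature=2.0):
    """Combine a CTR loss on new data with a KD loss on replayed data.

    alpha balances learning new interactions (first term) against
    preserving the old model's behavior (second term); both values
    here are illustrative assumptions.
    """
    eps = 1e-12
    # Binary cross-entropy on newly collected interactions.
    p_new = sigmoid(student_logits_new)
    new_loss = -np.mean(labels_new * np.log(p_new + eps)
                        + (1 - labels_new) * np.log(1 - p_new + eps))
    # Soften both models' predictions on replayed samples and penalize
    # the student for diverging from the frozen teacher.
    t_soft = sigmoid(teacher_logits_replay / temperature)
    s_soft = sigmoid(student_logits_replay / temperature)
    kd_loss = -np.mean(t_soft * np.log(s_soft + eps)
                       + (1 - t_soft) * np.log(1 - s_soft + eps))
    return (1 - alpha) * new_loss + alpha * kd_loss
```

Because the KD term only needs forward passes of the frozen teacher on a small replayed sample, such an update is far cheaper than retraining on the full history, which is consistent with the training-time reductions reported for FPSKD relative to Full Batch.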